
    Masking Strategies for Image Manifolds

    We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the geometric structure of the manifold present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full-image counterpart as closely as possible. More precisely, we show that one can accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.
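    A rough sketch of the greedy approximation mentioned above: pixels are added one at a time so that pairwise distances computed on the masked pixels track the full-image distances, a simple proxy for preserving local geometric structure. The objective, the rescaling, and the function names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def greedy_mask(images, m):
    """Greedily pick m pixels whose restriction best preserves pairwise
    distances between vectorized images (a proxy for local geometry).
    Illustrative sketch only; not the paper's exact binary integer program."""
    n_imgs, n_pix = images.shape
    # Per-pixel contribution to every pairwise squared distance.
    diff2 = (images[:, None, :] - images[None, :, :]) ** 2    # (n, n, n_pix)
    full_d2 = diff2.sum(axis=-1)                              # full-image distances
    chosen, masked_d2 = [], np.zeros_like(full_d2)
    for _ in range(m):
        best_p, best_err = -1, np.inf
        for p in range(n_pix):
            if p in chosen:
                continue
            # Rescale so masked distances are comparable to the full ones.
            scale = n_pix / (len(chosen) + 1)
            err = np.linalg.norm(full_d2 - scale * (masked_d2 + diff2[..., p]))
            if err < best_err:
                best_p, best_err = p, err
        chosen.append(best_p)
        masked_d2 += diff2[..., best_p]
    return np.array(chosen)

# Example: keep 10 of 64 pixels for 30 small synthetic images.
rng = np.random.default_rng(0)
mask = greedy_mask(rng.random((30, 64)), m=10)
```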

    Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation

    We study the compressed sensing (CS) signal estimation problem where an input signal is measured via a linear matrix multiplication under additive noise. While this setup usually assumes sparsity or compressibility in the input signal during recovery, the signal structure that can be leveraged is often not known a priori. In this paper, we consider universal CS recovery, where the statistics of a stationary ergodic signal source are estimated simultaneously with the signal itself. Inspired by Kolmogorov complexity and minimum description length, we focus on a maximum a posteriori (MAP) estimation framework that leverages universal priors to match the complexity of the source. Our framework can also be applied to general linear inverse problems where more measurements than in CS might be needed. We provide theoretical results that support the algorithmic feasibility of universal MAP estimation using a Markov chain Monte Carlo implementation, which is computationally challenging. We incorporate several techniques to accelerate the algorithm while providing reconstruction quality that is comparable to, and in many cases better than, existing algorithms. Experimental results show the promise of universality in CS, particularly for low-complexity sources that do not exhibit standard sparsity or compressibility. Comment: 29 pages, 8 figures.
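    As a rough illustration of the MAP-via-MCMC idea, the sketch below runs a Metropolis sampler over quantized signals, trading a least-squares data-fit term against a zeroth-order empirical-entropy proxy for universal coding length. The energy, the prior proxy, and the parameter names are assumptions for illustration; the universal prior used in the paper is considerably richer.

```python
import numpy as np

def empirical_entropy_bits(x):
    """Zeroth-order empirical entropy (bits/symbol), a crude stand-in
    for a universal coding length."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def universal_map_mcmc(y, A, levels, sigma=0.1, iters=20000, seed=0):
    """Metropolis sampler over quantized signals that trades off the data
    fit ||y - Ax||^2 against a complexity proxy. Illustrative sketch."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = rng.choice(levels, size=n)

    def energy(x):
        fit = np.sum((y - A @ x) ** 2) / (2 * sigma ** 2)        # nats
        complexity = n * empirical_entropy_bits(x) * np.log(2)   # bits -> nats
        return fit + complexity

    e = energy(x)
    for _ in range(iters):
        proposal = x.copy()
        proposal[rng.integers(n)] = rng.choice(levels)   # resample one entry
        e_prop = energy(proposal)
        if e_prop < e or rng.random() < np.exp(e - e_prop):
            x, e = proposal, e_prop
    return x
```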

    Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    We propose new compressive parameter estimation algorithms that use polar interpolation to improve estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two respects: (i) we extend the formulation from real non-negative amplitude parameters to arbitrary complex ones, and (ii) we allow for mismatch between the manifold described by the parameters and its polar approximation. To quantify the improvements afforded by the proposed extensions, we evaluate six algorithms for estimating the parameters of sparse translation-invariant signals, exemplified by the time delay estimation problem. The evaluation is based on three performance metrics: estimator precision, sampling rate, and computational complexity. We use compressive sensing with all the algorithms to lower the necessary sampling rate and show that good estimation precision can still be attained while keeping the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or convert to a frequency-estimation problem followed by a super-resolution algorithm. The algorithms studied here provide various tradeoffs between computational complexity, estimation precision, and necessary sampling rate. The work shows that compressive sensing for the class of sparse translation-invariant signals allows for a decrease in sampling rate and that the use of polar interpolation increases the estimation precision. Comment: 13 pages, 5 figures; to appear in IEEE Transactions on Signal Processing; minor edits and corrections.
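    The polar-interpolation step itself can be sketched as follows: three (possibly compressed) samples of the signal manifold around a coarse delay estimate define a circular arc, and the measurement's projection onto the plane of that arc is mapped back to a refined delay. The names, the linear angle-to-delay mapping, and the assumption of a roughly unit-amplitude measurement are simplifications, not the authors' exact construction.

```python
import numpy as np

def polar_interpolate(y, p_minus, p0, p_plus, delta):
    """Refine a coarse delay by fitting a circle through three manifold
    samples p(t0 - delta), p(t0), p(t0 + delta) and locating the projection
    of the measurement y on that arc. Returns an offset in [-delta, delta].
    Illustrative sketch; assumes y has roughly the same norm as the atoms
    and that the three samples are not (nearly) collinear."""
    # Orthonormal basis for the plane through the three samples.
    u1 = p_plus - p_minus
    u1 = u1 / np.linalg.norm(u1)
    u2 = (p0 - p_minus) - ((p0 - p_minus) @ u1) * u1
    u2 = u2 / np.linalg.norm(u2)
    B = np.stack([u1, u2], axis=1)                    # (N, 2)

    def coords(p):                                    # 2-D plane coordinates
        return B.T @ (p - p_minus)

    a, b, c, z = coords(p_minus), coords(p0), coords(p_plus), coords(y)
    # Circumcenter of the triangle (a, b, c).
    M = 2.0 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    center = np.linalg.solve(M, rhs)

    def angle(p):
        return np.arctan2(p[1] - center[1], p[0] - center[0])

    # Map the angle of the projected measurement linearly back to a delay
    # offset, assuming the angles vary monotonically over the small arc.
    offset = (angle(z) - angle(b)) / (angle(c) - angle(a)) * (2.0 * delta)
    return float(np.clip(offset, -delta, delta))
```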

    Conditioning of Random Block Subdictionaries with Applications to Block-Sparse Recovery and Regression

    The linear model, in which a set of observations is assumed to be given by a linear combination of columns of a matrix, has long been the mainstay of the statistics and signal processing literature. One particular challenge for inference under linear models is understanding the conditions on the dictionary under which reliable inference is possible. This challenge has attracted renewed attention in recent years since many modern inference problems deal with the "underdetermined" setting, in which the number of observations is much smaller than the number of columns in the dictionary. This paper makes several contributions for this setting when the set of observations is given by a linear combination of a small number of groups of columns of the dictionary, termed the "block-sparse" case. First, it specifies conditions on the dictionary under which most block subdictionaries are well conditioned. This result is fundamentally different from prior work on block-sparse inference because (i) it provides conditions that can be explicitly computed in polynomial time, (ii) the given conditions translate into near-optimal scaling of the number of columns of the block subdictionaries as a function of the number of observations for a large class of dictionaries, and (iii) it suggests that the spectral norm and the quadratic-mean block coherence of the dictionary (rather than the worst-case coherences) fundamentally limit the scaling of dimensions of the well-conditioned block subdictionaries. Second, this paper investigates the problems of block-sparse recovery and block-sparse regression in underdetermined settings. Near-optimal block-sparse recovery and regression are possible for certain dictionaries as long as the dictionary satisfies easily computable conditions and the coefficients describing the linear combination of groups of columns can be modeled through a mild statistical prior. Comment: 39 pages, 3 figures. A revised and expanded version of the paper published in IEEE Transactions on Information Theory (DOI: 10.1109/TIT.2015.2429632); this revision includes corrections in the proofs of some of the results.
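    The dictionary quantities the result hinges on, the spectral norm and the block coherences, are straightforward to compute. The sketch below uses one common normalization (dividing the spectral norms of the cross-Gram blocks by the block size); the paper's exact definitions may differ.

```python
import numpy as np

def block_dictionary_stats(D, block_size):
    """Spectral norm, worst-case block coherence, and quadratic-mean block
    coherence of a dictionary whose columns form consecutive equal-size
    blocks. One common convention; illustrative only."""
    n, p = D.shape
    assert p % block_size == 0, "columns must split into equal blocks"
    blocks = [D[:, i:i + block_size] for i in range(0, p, block_size)]
    # Spectral norms of all cross-Gram blocks D_i^H D_j.
    G = np.array([[np.linalg.norm(Bi.conj().T @ Bj, 2) for Bj in blocks]
                  for Bi in blocks])
    np.fill_diagonal(G, 0.0)
    worst_case = G.max() / block_size
    quad_mean = np.sqrt((G ** 2).sum(axis=1) / (len(blocks) - 1)).max() / block_size
    return np.linalg.norm(D, 2), worst_case, quad_mean
```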

    Compressive Time Delay Estimation Using Interpolation

    Time delay estimation has long been an active area of research. In this work, we show that compressive sensing with interpolation can be used to achieve good estimation precision while lowering the sampling frequency. We propose an Interpolating Band-Excluded Orthogonal Matching Pursuit algorithm that uses one of two interpolation functions to estimate the time delay parameter. The numerical results show that interpolation improves estimation precision and that compressive sensing provides an elegant tradeoff that may lower the required sampling frequency while still attaining a desired estimation performance. Comment: 5 pages, 2 figures; technical report supporting a 1-page submission for GlobalSIP 201
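    A minimal sketch of the band-excluded matching-pursuit idea with parabolic peak interpolation is given below; the grid of delayed atoms, the exclusion width, and the choice of parabolic (rather than polar) interpolation are illustrative assumptions, not the algorithm as published.

```python
import numpy as np

def interpolating_band_excluded_omp(y, atoms, delays, k, exclusion=2):
    """OMP over a grid of (possibly compressed) delayed atoms with band
    exclusion and parabolic peak interpolation. Returns k refined delay
    estimates. Illustrative sketch, not the published algorithm."""
    step = delays[1] - delays[0]
    banned = np.zeros(len(delays), dtype=bool)
    residual, selected, estimates = y.copy(), [], []
    for _ in range(k):
        corr = np.abs(atoms.conj().T @ residual)
        corr[banned] = 0.0
        i = int(np.argmax(corr))
        # Parabolic interpolation of the correlation peak around grid point i.
        if 0 < i < len(delays) - 1:
            c_m, c_0, c_p = corr[i - 1], corr[i], corr[i + 1]
            den = c_m - 2.0 * c_0 + c_p
            shift = 0.5 * (c_m - c_p) / den if den != 0 else 0.0
        else:
            shift = 0.0
        estimates.append(delays[i] + shift * step)
        # Exclude a band around the chosen atom so later picks stay separated.
        banned[max(0, i - exclusion):i + exclusion + 1] = True
        # Standard OMP residual update over the grid atoms selected so far.
        selected.append(i)
        A_sel = atoms[:, selected]
        coef, *_ = np.linalg.lstsq(A_sel, y, rcond=None)
        residual = y - A_sel @ coef
    return np.array(estimates)
```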